15 research outputs found

    PERSON RE-IDENTIFICATION USING RGB-DEPTH CAMERAS

    Full text link
    [EN] The presence of surveillance systems in our lives has increased drastically in recent years. Camera networks can be seen in almost every crowded public and private place, generating huge amounts of data with valuable information. The automatic analysis of these data plays an important role in extracting relevant information from the scene. In particular, person re-identification is a prominent topic that has attracted great interest, especially in the fields of security and marketing. However, factors such as changes in illumination conditions, variations in a person's pose, occlusions, or the presence of outliers make this topic truly challenging. Fortunately, the recent introduction of new technologies such as depth cameras opens new paradigms in the image processing field and brings new possibilities. This Thesis proposes a complete new framework to tackle the problem of person re-identification using commercial RGB-depth cameras. The work includes the analysis and evaluation of new approaches for the segmentation, tracking, description and matching modules. To evaluate our contributions, a public dataset for person re-identification using RGB-depth cameras has been created. RGB-depth cameras provide accurate 3D point clouds with color information. Based on the analysis of the depth information, a novel algorithm for person segmentation is proposed and evaluated. This method accurately segments any person in the scene and naturally copes with occlusions and connected people. The segmentation mask of a person generates a 3D person cloud, which can easily be tracked over time based on proximity. The accumulation of all the person point clouds over time generates a set of high-dimensional color features, named raw features, that provides useful information about the person's appearance. In this Thesis, we propose a family of methods to extract relevant information from the raw features in different ways. The first approach compacts the raw features into a single color vector, named Bodyprint, that provides a good generalisation of the person's appearance over time. Second, we introduce the concept of the 3D Bodyprint, an extension of the Bodyprint descriptor that includes the angular distribution of the color features. Third, we characterise the person's appearance as a bag of color features that are independently generated over time. This descriptor receives the name Bag of Appearances because of its similarity to the concept of Bag of Words. Finally, we use different probabilistic latent variable models to reduce the feature vectors from a statistical perspective. The evaluation of the methods demonstrates that our proposals outperform the state of the art.
    Oliver Moll, J. (2015). PERSON RE-IDENTIFICATION USING RGB-DEPTH CAMERAS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59227
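    The descriptor pipeline summarised above lends itself to a short illustration. The sketch below is a hypothetical, simplified approximation of a Bodyprint-style descriptor: it bins the colours of a segmented, tracked person cloud by normalised height and averages the per-frame histograms over time. The function names and parameters (`frame_descriptor`, `n_height_bins`, per-channel histograms) are illustrative assumptions, not the thesis implementation.

        import numpy as np

        def frame_descriptor(points_xyz, colors_rgb, n_height_bins=10, n_color_bins=8):
            """Height-binned colour histograms for one segmented person cloud.

            points_xyz: (N, 3) array of 3D points; colors_rgb: (N, 3) values in [0, 1].
            Returns a vector of length n_height_bins * 3 * n_color_bins.
            """
            z = points_xyz[:, 2]
            # Normalise height so the descriptor does not depend on where the person stands.
            h = (z - z.min()) / (z.max() - z.min() + 1e-9)
            bin_idx = np.minimum((h * n_height_bins).astype(int), n_height_bins - 1)
            feats = []
            for b in range(n_height_bins):
                sel = colors_rgb[bin_idx == b]
                for c in range(3):  # one normalised histogram per colour channel
                    hist, _ = np.histogram(sel[:, c], bins=n_color_bins, range=(0.0, 1.0))
                    feats.append(hist / (hist.sum() + 1e-9))
            return np.concatenate(feats)

        def bodyprint(frames):
            """Average the per-frame descriptors accumulated while tracking one person."""
            return np.mean([frame_descriptor(p, c) for p, c in frames], axis=0)

    Matching two identities would then amount to comparing their averaged vectors with any histogram distance; the 3D Bodyprint and Bag of Appearances variants described above refine this basic accumulation step.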

    Detección de Personas en secuencias de vídeo en tiempo real

    Full text link
    [EN] This document describes a combination of methods for vision-based pedestrian detection in real time. A preliminary selection of candidates in the scene is provided by boosting techniques. Then, feature extraction and classification using Support Vector Machines is carried out on these candidates, providing the final prediction. Boosting methods have been shown to yield rapid but inaccurate predictions, since they present a high rate of false alarms. On the other hand, feature-based methods perform well in terms of accuracy, though at a high computational cost. Overall, the combination of both methods provides good results, since the global system takes advantage of the strengths of each individual method. Moreover, the system is open to further improvements. Oliver Moll, J. (2007). Detección de Personas en secuencias de vídeo en tiempo real. http://hdl.handle.net/10251/12624

    A Hybrid Real-Time Vision-Based Person Detection Method

    Full text link
    [EN] In this paper, we introduce a hybrid real-time method for vision-based pedestrian detection, made up of the sequential combination of two basic methods applied in a coarse-to-fine fashion. The proposed method aims to achieve an improved balance between detection accuracy and computational load by taking advantage of the strengths of these basic techniques. Haar-like features combined with boosting techniques, which have been shown to provide rapid but not sufficiently accurate results in human detection, are used in the first stage to provide a preliminary candidate selection in the scene. Then, feature extraction and classification methods, which achieve high accuracy rates at the expense of a higher computational cost, are applied to the boosting candidates, providing the final prediction. Experimental results show that the proposed method performs effectively and efficiently, which supports its suitability for real applications. This work is supported by the CASBLIP project (6th FP). The authors acknowledge the support of the Technological Institute of Optics, Colour and Imaging of Valencia - AIDO. Dr. Samuel Morillas acknowledges the support of Generalitat Valenciana under grant GVPRE/2008/257 and Universitat Politècnica de València under grant Primeros Proyectos de Investigación 13202. Oliver Moll, J.; Albiol Colomer, A.; Morillas, S.; Peris Fajarnes, G. (2011). A Hybrid Real-Time Vision-Based Person Detection Method. Waves. 86-95. http://hdl.handle.net/10251/57676
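    A minimal sketch of this coarse-to-fine idea, using stock OpenCV components rather than the authors' trained models: a boosted Haar cascade proposes candidate windows cheaply, and the default HOG + linear SVM people detector verifies each candidate. The cascade file, margin and thresholds below are generic assumptions, not the parameters of the published system.

        import cv2

        # Coarse stage: boosted Haar cascade proposes person candidates quickly.
        cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")
        # Fine stage: HOG features + linear SVM verify each candidate more accurately.
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        def detect_people(frame_bgr, margin=16):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
            detections = []
            for (x, y, w, h) in candidates:
                # Enlarge the candidate window a little before running the slower verifier.
                x0, y0 = max(x - margin, 0), max(y - margin, 0)
                roi = frame_bgr[y0:y + h + margin, x0:x + w + margin]
                if roi.shape[0] < 128 or roi.shape[1] < 64:
                    continue  # too small for the default 64x128 HOG window
                found, _weights = hog.detectMultiScale(roi, winStride=(8, 8))
                if len(found) > 0:
                    detections.append((x, y, w, h))  # keep only verified candidates
            return detections

    Only windows that pass both stages are reported, which keeps the fast-but-noisy cascade from flooding the output with false alarms while avoiding a full-frame sliding-window HOG search.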

    Using latent features for short-term person re-identification with RGB-D cameras

    Full text link
    This paper presents a system for people re-identification in uncontrolled scenarios using RGB-depth cameras. Compared to conventional RGB cameras, the use of depth information greatly simplifies the tasks of segmentation and tracking. In previous work, we proposed a similar architecture where people were characterized using color-based descriptors that we named bodyprints. In this work, we propose the use of latent feature models to extract more relevant information from the bodyprint descriptors by reducing their dimensionality. Latent features can also cope with missing data in the case of occlusions. Different probabilistic latent feature models, such as probabilistic principal component analysis and factor analysis, are compared in the paper; the main difference between the models is how the observation noise is handled in each case. Re-identification experiments have been conducted in a real store where people behaved naturally. The results show that the use of the latent features significantly improves the re-identification rates compared to state-of-the-art works. The work presented in this paper has been funded by the Spanish Ministry of Science and Technology under the CICYT contract TEVISMART, TEC2009-09146. Oliver Moll, J.; Albiol Colomer, A.; Albiol Colomer, AJ.; Mossi García, JM. (2016). Using latent features for short-term person re-identification with RGB-D cameras. Pattern Analysis and Applications. 19(2):549-561. https://doi.org/10.1007/s10044-015-0489-8
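    As a rough illustration of the latent-feature step, the sketch below reduces descriptor dimensionality with scikit-learn's FactorAnalysis (factor analysis being one of the probabilistic latent variable models compared in the paper) and ranks gallery identities by distance in the latent space. The descriptor contents, latent dimension and Euclidean ranking are assumptions made for illustration, not the paper's exact configuration.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        def fit_latent_model(gallery_descriptors, n_latent=20):
            """gallery_descriptors: (n_people, d) matrix of appearance descriptors."""
            fa = FactorAnalysis(n_components=n_latent)
            latent_gallery = fa.fit_transform(gallery_descriptors)
            return fa, latent_gallery

        def reidentify(fa, latent_gallery, probe_descriptor):
            """Rank the gallery identities for one probe descriptor of size (d,)."""
            z = fa.transform(probe_descriptor.reshape(1, -1))
            dists = np.linalg.norm(latent_gallery - z, axis=1)
            return np.argsort(dists)  # indices of gallery identities, best match first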

    Research and Design of a Routing Protocol in Large-Scale Wireless Sensor Networks

    Get PDF
    As one of the ten key technologies of the future worldwide, wireless sensor networks integrate sensor technology, embedded computing, distributed information processing and ad hoc networking. They can sense, collect, process and transmit all kinds of information data within the network's deployment area in real time, and have very broad application prospects in fields such as military defense, biomedicine, environmental monitoring, disaster relief, counter-terrorism, and remote control of hazardous areas. This thesis studies and analyses the existing routing protocols for wireless sensor networks and designs a tree-based routing protocol for large-scale wireless sensor networks. The protocol forms routes from node address information, which simplifies complex and redundant routing-table lookup and maintenance, saves unnecessary overhead, improves routing efficiency, and achieves fast and effective data transmission. To support this routing protocol, the thesis proposes an adaptive dynamic address allocation algorithm, ADAR (Adaptive Dynamic Addre...). Degree: Master of Engineering. School and department: School of Information Science and Technology, Department of Communication Engineering (Communication and Information Systems). Student ID: 2332007115216
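    Since the address-allocation details are truncated in the record above, the snippet below only illustrates the general idea of address-based tree routing (not the thesis's ADAR scheme): each node owns a contiguous address block nested inside its parent's, so the next hop can be computed from addresses alone, with no routing table.

        def next_hop(self_addr, block_size, children, dest):
            """Pick the next hop for `dest` using only address arithmetic.

            children: list of (child_addr, child_block_size) pairs whose blocks
            are nested inside [self_addr, self_addr + block_size).
            """
            if dest == self_addr:
                return self_addr      # delivered locally
            if not (self_addr <= dest < self_addr + block_size):
                return None           # outside our block: forward to the parent
            for child_addr, child_block in children:
                if child_addr <= dest < child_addr + child_block:
                    return child_addr  # descend into the matching sub-block
            return dest               # directly attached leaf within our own block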

    Biomimetic Dispersive Solid-Phase Microextraction as a Novel Concept for High-Throughput Estimation of Human Oral Absorption of Organic Compounds

    No full text
    Statistical data for the fourteen in vitro MLR models evaluated for the prediction of the effective permeability of organic compounds across the human intestine, presented in Table S1 and Table 2 of the article in Analytical Chemistry, 95, 2023, 13123-13131 (DOI: 10.1021/acs.analchem.3c01749).

    Deep Learning for Skin Melanocytic Tumors in Whole-Slide Images: A Systematic Review

    Full text link
    [EN] Simple Summary: Deep learning (DL) is expanding into the surgical pathology field and shows promising outcomes in diminishing subjective interpretations, especially in dermatopathology. We aim to show the efforts made to implement DL models for melanocytic tumors in whole-slide images. Four electronic databases were systematically searched, and 28 studies were identified. Our analysis revealed four research trends: DL models vs. pathologists, diagnostic prediction, prognosis, and regions of interest. We also highlight relevant issues that must be considered to implement these models in real scenarios, taking into account pathologists' and engineers' perspectives. The rise of Artificial Intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, Pubmed, Web of Science, and Virtual Health Library were used to search for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with an emphasis placed on the artificial intelligence methods used. Twenty-eight studies that fulfilled the inclusion criteria were grouped into four groups based on their clinical objectives: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the necessary factors for better performance in real scenarios. This work has received funding from the European Union's Horizon 2020 Programme for Research and Innovation, under the Marie Sklodowska Curie grant agreement No. 860627 (CLARIFY). The work is also supported by project INNEST/2021/321 (SAMUEL), PAID-10-21 - Subprograma 1 and PAID-PD-22 for postdoctoral research, and PI20/00094, Instituto de Salud Carlos III, and European FEDER funds. Mosquera-Zamudio, A.; Launet, L.; Tabatabaei, Z.; Parra-Medina, R.; Colomer, A.; Oliver Moll, J.; Monteagudo, C.... (2023). Deep Learning for Skin Melanocytic Tumors in Whole-Slide Images: A Systematic Review. Cancers. 15(1):1-19. https://doi.org/10.3390/cancers15010042